Results 1 - 20 of 55
1.
Comput Intell Neurosci ; 2023: 1305583, 2023.
Article in English | MEDLINE | ID: covidwho-2194246

ABSTRACT

Diabetic retinopathy (DR) is a common retinal vascular disease that can cause severe visual impairment. Using fundus images for intelligent diagnosis of DR is of great clinical significance. In this paper, an intelligent DR classification model for fundus images is proposed. The method can detect all five stages of DR: no DR, mild, moderate, severe, and proliferative. The model is composed of two key modules: the feature extraction block (FEB), which extracts features from fundus images, and the grading prediction block (GPB), which classifies the five stages of DR. The transformer in the FEB provides fine-grained attention that emphasizes retinal hemorrhage and exudate areas. The residual attention in the GPB effectively captures the different spatial regions occupied by different classes of objects. Comprehensive experiments on the DDR dataset demonstrate the superiority of our method, which achieves competitive performance compared with the benchmark method.
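The abstract does not include code; as a rough numpy sketch of the scaled dot-product self-attention that transformer blocks like the FEB build on (the patch count, embedding size, and identity Q/K/V projections below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patch_attention(tokens, d_k):
    """Scaled dot-product self-attention over image-patch tokens.

    tokens: (n_patches, d_model) patch embeddings. Identity Q/K/V
    projections are used for brevity; a real transformer learns
    separate weight matrices for each.
    """
    q = k = v = tokens
    scores = q @ k.T / np.sqrt(d_k)      # (n, n) pairwise patch affinities
    weights = softmax(scores, axis=-1)   # each row is a distribution over patches
    return weights @ v, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 32))       # 16 patches, 32-dim embeddings
out, w = patch_attention(tokens, d_k=32)
```

In a trained model, rows of `w` would concentrate on clinically salient patches such as hemorrhage or exudate regions.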


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnostic imaging , Fundus Oculi , Image Interpretation, Computer-Assisted/methods , Benchmarking
2.
Sci Rep ; 12(1): 1716, 2022 02 02.
Article in English | MEDLINE | ID: covidwho-1900583

ABSTRACT

The rapid evolution of the novel coronavirus disease (COVID-19) pandemic has resulted in an urgent need for effective clinical tools to reduce transmission and manage severe illness. Numerous teams are quickly developing artificial intelligence approaches to these problems, including using deep learning to predict COVID-19 diagnosis and prognosis from chest computed tomography (CT) imaging data. In this work, we assess the value of aggregated chest CT data for COVID-19 prognosis compared to clinical metadata alone. We develop a novel patient-level algorithm to aggregate the chest CT volume into a 2D representation that can be easily integrated with clinical metadata to distinguish COVID-19 pneumonia from chest CT volumes from healthy participants and participants with other viral pneumonia. Furthermore, we present a multitask model for joint segmentation of different classes of pulmonary lesions present in COVID-19 infected lungs that can outperform individual segmentation models for each task. We directly compare this multitask segmentation approach to combining feature-agnostic volumetric CT classification feature maps with clinical metadata for predicting mortality. We show that the combination of features derived from the chest CT volumes improves the AUC to 0.80, compared with the 0.52 obtained using patients' clinical data alone. These approaches enable the automated extraction of clinically relevant features from chest CT volumes for risk stratification of COVID-19 patients.
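The paper's pipeline is not reproduced here, but its core comparison, AUC from clinical metadata alone versus metadata combined with image-derived features, can be sketched with synthetic data and a minimal pure-numpy logistic regression (all data, dimensions, and hyperparameters below are illustrative assumptions):

```python
import numpy as np

def auc(y_true, scores):
    """AUC via the normalized Mann-Whitney U statistic (no ties assumed)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
n = 400
img_feats = rng.normal(size=(n, 5))   # stand-in CT-derived features (informative)
meta = rng.normal(size=(n, 3))        # stand-in clinical metadata (uninformative here)
y = (img_feats[:, 0] + img_feats[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(float)

train, test = np.arange(0, 300), np.arange(300, n)
w_meta = fit_logistic(meta[train], y[train])
combined = np.hstack([meta, img_feats])
w_comb = fit_logistic(combined[train], y[train])

auc_meta = auc(y[test], meta[test] @ w_meta)        # near chance by construction
auc_comb = auc(y[test], combined[test] @ w_comb)    # much higher
```

The synthetic setup mirrors the reported effect direction (0.52 vs. 0.80) only qualitatively; the magnitudes here depend entirely on the simulated data.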


Subject(s)
COVID-19/diagnosis , COVID-19/virology , Deep Learning , SARS-CoV-2 , Thorax/diagnostic imaging , Thorax/pathology , Tomography, X-Ray Computed , Algorithms , COVID-19/mortality , Databases, Genetic , Humans , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Prognosis , Tomography, X-Ray Computed/methods , Tomography, X-Ray Computed/standards
3.
AJR Am J Roentgenol ; 218(2): 270-278, 2022 02.
Article in English | MEDLINE | ID: covidwho-1793148

ABSTRACT

BACKGROUND. The need for second visits between screening mammography and diagnostic imaging contributes to disparities in the time to breast cancer diagnosis. During the COVID-19 pandemic, an immediate-read screening mammography program was implemented to reduce patient visits and decrease time to diagnostic imaging. OBJECTIVE. The purpose of this study was to measure the impact of an immediate-read screening program with focus on disparities in same-day diagnostic imaging after abnormal findings are made at screening mammography. METHODS. In May 2020, an immediate-read screening program was implemented whereby a dedicated breast imaging radiologist interpreted all screening mammograms in real time; patients received results before discharge; and efforts were made to perform any recommended diagnostic imaging during the visit (performed by different radiologists). Screening mammographic examinations performed from June 1, 2019, through October 31, 2019 (preimplementation period), and from June 1, 2020, through October 31, 2020 (postimplementation period), were retrospectively identified. Patient characteristics were recorded from the electronic medical record. Multivariable logistic regression models incorporating patient age, race and ethnicity, language, and insurance type were estimated to identify factors associated with same-day diagnostic imaging. Screening metrics were compared between periods. RESULTS. A total of 8222 preimplementation and 7235 postimplementation screening examinations were included; 521 patients had abnormal screening findings before implementation, and 359 after implementation. Before implementation, 14.8% of patients underwent same-day diagnostic imaging after abnormal screening mammograms. This percentage increased to 60.7% after implementation. 
Before implementation, patients who identified their race as other than White had significantly lower odds than patients who identified their race as White of undergoing same-day diagnostic imaging after receiving abnormal screening results (adjusted odds ratio, 0.30; 95% CI, 0.10-0.86; p = .03). After implementation, the odds of same-day diagnostic imaging were not significantly different between patients of other races and White patients (adjusted odds ratio, 0.92; 95% CI, 0.50-1.71; p = .80). After implementation, there was no significant difference in race and ethnicity between patients who underwent and those who did not undergo same-day diagnostic imaging after receiving abnormal results of screening mammography (p > .05). The rate of abnormal interpretation was significantly lower after than it was before implementation (5.0% vs 6.3%; p < .001). Cancer detection rate and PPV1 (PPV based on positive findings at screening examination) were not significantly different before and after implementation (p > .05). CONCLUSION. Implementation of the immediate-read screening mammography program reduced prior racial and ethnic disparities in same-day diagnostic imaging after abnormal screening mammograms. CLINICAL IMPACT. An immediate-read screening program provides a new paradigm for improved screening mammography workflow that allows more rapid diagnostic workup with reduced disparities in care.


Subject(s)
Breast Neoplasms/diagnostic imaging , COVID-19/prevention & control , Delayed Diagnosis/prevention & control , Healthcare Disparities/statistics & numerical data , Image Interpretation, Computer-Assisted/methods , Mammography/methods , Racial Groups/statistics & numerical data , Adult , Breast/diagnostic imaging , Female , Humans , Middle Aged , Pandemics , Retrospective Studies , SARS-CoV-2 , Time
4.
Sensors (Basel) ; 22(6)2022 Mar 18.
Article in English | MEDLINE | ID: covidwho-1765834

ABSTRACT

Blood cancer, or leukemia, has a negative impact on the blood and/or bone marrow of children and adults. Acute lymphocytic leukemia (ALL) and acute myeloid leukemia (AML) are two sub-types of acute leukemia. The Internet of Medical Things (IoMT) and artificial intelligence have allowed for the development of advanced technologies to assist in recently introduced medical procedures. Hence, in this paper, we propose a new intelligent IoMT framework for the automated classification of acute leukemias using microscopic blood images. The workflow of our proposed framework includes three main stages. First, blood samples are collected by wireless digital microscopy and sent to a cloud server. Second, the cloud server carries out automatic identification of the blood condition - leukemia or healthy - utilizing our developed generative adversarial network (GAN) classifier. Finally, the classification results are sent to a hematologist for medical approval. The developed GAN classifier was successfully evaluated on two public data sets: ALL-IDB and the ASH image bank. It achieved the best accuracy scores of 98.67% for binary classification (ALL or healthy) and 95.5% for multi-class classification (ALL, AML, and normal blood cells) when compared with existing state-of-the-art methods. The results of this study demonstrate the feasibility of our proposed IoMT framework for the automated diagnosis of acute leukemias. Clinical realization of this blood diagnosis system is our future work.


Subject(s)
Internet of Things , Leukemia , Algorithms , Artificial Intelligence , Child , Humans , Image Interpretation, Computer-Assisted/methods
5.
PLoS One ; 17(1): e0262052, 2022.
Article in English | MEDLINE | ID: covidwho-1643253

ABSTRACT

The COVID-19 epidemic has had a catastrophic impact on global well-being and public health, with more than 27 million confirmed cases reported worldwide to date. Given the growing number of confirmed cases and the challenges posed by variations of COVID-19, timely and accurate classification of healthy and infected patients is essential to control and treat the disease. We aim to develop a deep learning-based system for the classification and reliable detection of COVID-19 using chest radiography. First, we evaluate the performance of various state-of-the-art convolutional neural networks (CNNs) proposed in recent years for medical image classification. Second, we develop and train a CNN from scratch. In both cases, we use a public X-ray dataset for training and validation. For transfer learning, we obtain 100% accuracy for binary classification (i.e., Normal/COVID-19) and 87.50% accuracy for tertiary classification (Normal/COVID-19/Pneumonia). With the CNN trained from scratch, we achieve 93.75% accuracy for tertiary classification. In the case of transfer learning, the classification accuracy drops as the number of classes increases. The results are demonstrated by comprehensive receiver operating characteristic (ROC) and confusion matrix analysis with 10-fold cross-validation.
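The accuracies above come from confusion-matrix analysis; as a minimal illustration of how a multi-class confusion matrix and the accuracy derived from it are computed (the labels below are toy values, not the paper's data):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# toy 3-class example: Normal=0, COVID-19=1, Pneumonia=2
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
accuracy = np.trace(cm) / cm.sum()   # correct predictions lie on the diagonal
```

Per-class sensitivity and specificity (and hence ROC points) follow from the same matrix by normalizing rows and columns.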


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Pneumonia, Bacterial/diagnostic imaging , COVID-19/pathology , COVID-19/virology , Case-Control Studies , Databases, Factual , Diagnosis, Differential , Female , Humans , Male , Pneumonia, Bacterial/pathology , Pneumonia, Bacterial/virology , ROC Curve , Radiography, Thoracic , SARS-CoV-2/pathogenicity
6.
PLoS One ; 16(3): e0247839, 2021.
Article in English | MEDLINE | ID: covidwho-1574949

ABSTRACT

As SARS-CoV-2 has spread quickly throughout the world, the scientific community has spent major efforts on better understanding the characteristics of the virus and possible means to prevent, diagnose, and treat COVID-19. A valid approach presented in the literature is to develop an image-based method to support COVID-19 diagnosis using convolutional neural networks (CNN). Because the availability of radiological data is rather limited due to the novelty of COVID-19, several methodologies rely on reduced datasets, which may be inadequate and bias the model. Here, we performed an analysis combining six different databases of chest X-ray images from open datasets to distinguish images of infected patients while differentiating COVID-19 and pneumonia from 'no-findings' images. In addition, the performance of models created from fewer databases, which may imperceptibly overestimate their results, is discussed. Two CNN-based architectures were created to process images of different sizes (512 × 512, 768 × 768, 1024 × 1024, and 1536 × 1536). Our best model achieved a balanced accuracy (BA) of 87.7% in predicting one of the three classes ('no-findings', 'COVID-19', and 'pneumonia') and a balanced precision of 97.0% for the 'COVID-19' class. We also provided binary classification with a precision of 91.0% for detection of sick patients (i.e., with COVID-19 or pneumonia) and 98.4% for COVID-19 detection (i.e., differentiating from 'no-findings' or 'pneumonia'). Although we achieved a likely unrealistic 97.2% BA for one specific case, the proposed methodology of combining multiple databases achieved better and less inflated results than models trained on single image datasets. Thus, this framework is promising for a low-cost, fast, and noninvasive means to support the diagnosis of COVID-19.


Subject(s)
COVID-19/diagnostic imaging , Databases, Factual , Neural Networks, Computer , Pneumonia/diagnostic imaging , Algorithms , Bias , Deep Learning , Humans , Image Interpretation, Computer-Assisted , Radiography, Thoracic
7.
Comput Math Methods Med ; 2021: 8081276, 2021.
Article in English | MEDLINE | ID: covidwho-1435106

ABSTRACT

The use of Internet technology has made multimedia data available in many formats. Unauthorized users misuse multimedia content by distributing it on various websites for illicit financial gain without the original copyright holder's consent. With the rise in COVID-19 cases, large amounts of patient information are leaked without patients' knowledge, so an intelligent technique is required to protect the integrity of patient data by placing an invisible signal, known as a watermark, on medical images. In this paper, a new watermarking method is proposed for both standard and medical images. The paper addresses the use of digital rights management in medical applications, such as embedding watermarks in medical images related to neurodegenerative disorders, lung disorders, and heart conditions. Various quality parameters are used to evaluate the developed method. In addition, the watermarking scheme is tested by applying various signal-processing attacks.
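The abstract does not specify the embedding algorithm (medical watermarking typically works in a transform domain). Purely to illustrate the idea of an invisible watermark, here is a simpler spatial-domain least-significant-bit sketch in numpy; all names, sizes, and parameters are hypothetical:

```python
import numpy as np

def embed_lsb(image, bits):
    """Hide watermark bits in the least significant bit of the first pixels.
    Each carrier pixel changes by at most 1 gray level, so the mark is invisible."""
    out = image.astype(np.uint8).copy()
    flat = out.reshape(-1)
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return out

def extract_lsb(image, n_bits):
    """Recover the first n_bits watermark bits."""
    return image.reshape(-1)[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in medical image
mark = rng.integers(0, 2, size=256).astype(np.uint8)          # 256-bit watermark
stego = embed_lsb(cover, mark)
recovered = extract_lsb(stego, 256)
```

LSB embedding is fragile under the signal-processing attacks the paper tests (compression, filtering), which is exactly why robust schemes prefer transform-domain coefficients.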


Subject(s)
COVID-19/diagnostic imaging , Computer Security , Neurodegenerative Diseases/diagnostic imaging , Neurodegenerative Diseases/genetics , Algorithms , Computational Biology/methods , Humans , Image Interpretation, Computer-Assisted/methods , Internet , Models, Statistical
8.
IEEE J Transl Eng Health Med ; 9: 1800209, 2021.
Article in English | MEDLINE | ID: covidwho-1388111

ABSTRACT

Background: Accurate and fast diagnosis of COVID-19 is very important for managing the medical condition of affected persons. The task is challenging owing to the shortage and ineffectiveness of clinical testing kits. However, the existing problems can be mitigated by employing computational intelligence techniques on radiological images such as CT scans (computed tomography) of the lungs. Extensive research has been reported using deep learning models to diagnose the severity of COVID-19 from CT images. This has undoubtedly minimized the manual involvement in abnormality identification, but reported detection accuracy is limited. Methods: The present work proposes an expert model based on deep features and a Parameter-Free BAT (PF-BAT) optimized fuzzy k-nearest neighbor (PF-FKNN) classifier to diagnose the novel coronavirus. In this proposed model, features are extracted from the fully connected layer of a transfer-learned MobileNetv2, followed by FKNN training. The hyperparameters of the FKNN are fine-tuned using PF-BAT. Results: The experimental results on the benchmark COVID CT scan data reveal that the proposed algorithm attains a validation accuracy of 99.38%, which is better than the existing state-of-the-art methods proposed in the past. Conclusion: The proposed model will help in the timely and accurate identification of the coronavirus at various phases. Such rapid diagnosis will assist clinicians in managing patients' healthcare conditions and will help speed recovery from the disease. Clinical and Translational Impact Statement - The proposed automated system can provide accurate and fast detection of the COVID-19 signature from lung radiographs. Also, the use of the lighter MobileNetv2 architecture makes it practical for real-time deployment.
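The PF-BAT tuning step is beyond a short sketch, but the fuzzy k-nearest-neighbor classifier at the core of the model can be illustrated in a few lines of numpy. This follows the classic Keller-style inverse-distance weighting with crisp training labels and toy parameters, not necessarily the paper's exact variant:

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=3, m=2.0, n_classes=2):
    """Return fuzzy class memberships for query point x.
    Each of the k nearest neighbors votes with weight 1/d^(2/(m-1));
    the returned memberships sum to 1."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1.0))
    u = np.zeros(n_classes)
    for weight, i in zip(w, idx):
        u[y_train[i]] += weight
    return u / u.sum()

# toy 1-D data: two tight clusters
X_train = np.array([[0.0], [0.2], [1.0], [1.2]])
y_train = np.array([0, 0, 1, 1])
u = fuzzy_knn_predict(X_train, y_train, np.array([0.1]), k=3)
```

The fuzzifier `m` and neighborhood size `k` are exactly the kind of hyperparameters the paper hands to PF-BAT for tuning.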


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Algorithms , Female , Humans , Lung/diagnostic imaging , Male , SARS-CoV-2 , Tomography, X-Ray Computed
9.
Ann Diagn Pathol ; 54: 151807, 2021 Oct.
Article in English | MEDLINE | ID: covidwho-1356125

ABSTRACT

Digital pathology has become an integral part of pathology education in recent years, particularly during the COVID-19 pandemic, for its potential utility as a teaching tool that augments the traditional 1-to-1 sign-out experience. Herein, we evaluate the utility of whole slide imaging (WSI) in reducing diagnostic errors in pigmented cutaneous lesions by pathology fellows without subspecialty training in dermatopathology. Ten cases of 4 pigmented cutaneous lesions commonly encountered by general pathologists were selected. Corresponding whole slide images were distributed to our fellows, along with two sets of online surveys, each composed of 10 multiple-choice questions with 4 answers. Identical cases were used for both surveys to minimize variability in trainees' scores depending on the perceived level of difficulty, with the second set being distributed after random shuffling. Brief image-based teaching slides, provided as a self-assessment tool, were given to trainees between surveys. Pre- and post-self-assessment scores were analyzed. 61% (17/28) and 39% (11/28) of fellows completed the first and second surveys, respectively. The mean score in the first survey was 5.2/10. The mean score in the second survey, following self-assessment, increased to 7.2/10. 64% (7/11) of trainees showed an improvement in their scores, with 1 trainee improving his/her score by 8 points. No fellow scored lower post-self-assessment than on the initial assessment. The difference in individual scores between the two surveys was statistically significant (p = 0.003). Our study demonstrates the utility of WSI-based self-assessment learning as a means of improving the diagnostic skills of pathology trainees in a short period of time.
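The p = 0.003 above comes from a paired comparison of each fellow's pre- and post-assessment scores. A sign-flip permutation test is one simple way to run such a paired test; the score vectors below are made up for illustration, not the study's data:

```python
import numpy as np

def paired_permutation_test(pre, post, n_perm=20000, seed=0):
    """Two-sided sign-flip permutation test on paired score differences.
    Under the null, each pair's difference is equally likely to be +d or -d."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(post, float) - np.asarray(pre, float)
    observed = abs(diff.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, len(diff)))
    null = np.abs((signs * diff).mean(axis=1))
    return float((null >= observed).mean())

# hypothetical pre/post scores for 11 trainees (out of 10)
pre = np.array([5, 4, 6, 5, 5, 6, 4, 1, 6, 5, 6])
post = np.array([7, 6, 8, 7, 7, 8, 6, 9, 7, 7, 7])
p = paired_permutation_test(pre, post)   # small p: consistent improvement
```

With only 11 pairs a Wilcoxon signed-rank test would be the conventional choice; the permutation version is shown because it is self-contained.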


Subject(s)
COVID-19/prevention & control , Clinical Competence , Education, Distance/methods , Education, Medical, Graduate/methods , Image Interpretation, Computer-Assisted/methods , Pathology, Clinical/education , Skin Diseases/pathology , Diagnostic Errors/prevention & control , Fellowships and Scholarships , Humans , Pathology, Clinical/methods , Skin Diseases/diagnosis , United States
10.
IEEE Trans Neural Netw Learn Syst ; 32(9): 3786-3797, 2021 09.
Article in English | MEDLINE | ID: covidwho-1348109

ABSTRACT

Medical imaging technologies, including computed tomography (CT) and chest X-ray (CXR), are widely employed to facilitate the diagnosis of COVID-19. Since manual report writing is usually too time-consuming, a more intelligent auxiliary medical system that could generate medical reports automatically and immediately is urgently needed. In this article, we propose the medical visual language BERT (Medical-VLBERT) model to identify abnormalities on COVID-19 scans and generate medical reports automatically based on the detected lesion regions. To produce more accurate medical reports and minimize the visual-and-linguistic differences, this model adopts an alternate learning strategy with two procedures: knowledge pretraining and transferring. More precisely, the knowledge pretraining procedure memorizes knowledge from medical texts, while the transferring procedure utilizes the acquired knowledge for professional medical sentence generation through observations of medical images. In practice, for automatic medical report generation on COVID-19 cases, we constructed a dataset of 368 medical findings in Chinese and 1104 chest CT scans from The First Affiliated Hospital of Jinan University, Guangzhou, China, and The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China. In addition, to alleviate the shortage of COVID-19 training samples, our model was first trained on the large-scale Chinese CX-CHR dataset and then transferred to the COVID-19 CT dataset for further fine-tuning. The experimental results showed that Medical-VLBERT achieved state-of-the-art performance on terminology prediction and report generation with the Chinese COVID-19 CT dataset and the CX-CHR dataset. The Chinese COVID-19 CT dataset is available at https://covid19ct.github.io/.


Subject(s)
COVID-19/diagnostic imaging , Machine Learning , Research Report/standards , Algorithms , Artificial Intelligence , China , Humans , Image Interpretation, Computer-Assisted , Terminology as Topic , Tomography, X-Ray Computed , Transfer, Psychology , Writing
11.
J Biomol Struct Dyn ; 39(10): 3615-3626, 2021 07.
Article in English | MEDLINE | ID: covidwho-1343546

ABSTRACT

Coronavirus remains a leading cause of death worldwide. Only a limited number of COVID-19 test kits are available in hospitals because of the daily increase in cases. Therefore, it is important to implement an automatic detection and classification system as a rapid alternative diagnostic option to prevent COVID-19 from spreading among individuals. Medical image analysis is one of the most promising research areas, providing facilities for the diagnosis and decision-making for a number of diseases, such as coronavirus. This paper conducts a comparative study of recent deep learning models (VGG16, VGG19, DenseNet201, Inception_ResNet_V2, Inception_V3, Resnet50, and MobileNet_V2) for the detection and classification of coronavirus pneumonia. The experiments were conducted using a chest X-ray and CT dataset of 6087 images (2780 images of bacterial pneumonia, 1493 of coronavirus, 231 of COVID-19, and 1583 normal), and confusion matrices were used to evaluate model performance. The results show that Inception_ResNet_V2 and DenseNet201 provide better results than the other models used in this work (92.18% accuracy for Inception_ResNet_V2 and 88.09% accuracy for DenseNet201). Communicated by Ramaswamy H. Sarma.


Subject(s)
COVID-19 , Deep Learning , Diagnosis, Computer-Assisted , COVID-19/diagnostic imaging , Humans , Image Interpretation, Computer-Assisted , X-Rays
12.
J Med Internet Res ; 23(7): e26995, 2021 07 16.
Article in English | MEDLINE | ID: covidwho-1341580

ABSTRACT

BACKGROUND: Papers on COVID-19 are being published at a high rate and concern many different topics. Innovative tools are needed to aid researchers to find patterns in this vast amount of literature to identify subsets of interest in an automated fashion. OBJECTIVE: We present a new online software resource with a friendly user interface that allows users to query and interact with visual representations of relationships between publications. METHODS: We publicly released an application called PLATIPUS (Publication Literature Analysis and Text Interaction Platform for User Studies) that allows researchers to interact with literature supplied by COVIDScholar via a visual analytics platform. This tool contains standard filtering capabilities based on authors, journals, high-level categories, and various research-specific details via natural language processing, and dozens of customizable visualizations that dynamically update from a researcher's query. RESULTS: PLATIPUS is available online, currently links to over 100,000 publications, and is still growing. This application has the potential to transform how COVID-19 researchers use public literature to enable their research. CONCLUSIONS: The PLATIPUS application provides the end user with a variety of ways to search, filter, and visualize over 100,000 COVID-19 publications.


Subject(s)
COVID-19 , Image Interpretation, Computer-Assisted , Information Storage and Retrieval , SARS-CoV-2 , Humans , Natural Language Processing , Software , User-Computer Interface
13.
Comput Math Methods Med ; 2021: 2485934, 2021.
Article in English | MEDLINE | ID: covidwho-1325174

ABSTRACT

With the continuous improvement of human living standards, dietary habits are constantly changing, which brings various bowel problems. Among them, the morbidity and mortality rates of colorectal cancer have maintained a significant upward trend. In recent years, the application of deep learning in the medical field has become increasingly widespread. In colonoscopy, artificial intelligence based on deep learning is mainly used to assist in the detection of colorectal polyps and the classification of colorectal lesions. In classification, however, polyps can be confused with other diseases. In order to accurately diagnose various diseases of the intestine and improve the classification accuracy for polyps, this work proposes a multiclassification method for medical colonoscopy images based on deep learning, which classifies four conditions: polyps, inflammation, tumor, and normal. In view of the relatively small data sets, a network pretrained on ImageNet via transfer learning was used as the starting model, and the prior knowledge learned from the source-domain task was applied to the classification of intestinal illnesses. We then fine-tuned the model on our data sets to make it more suitable for intestinal classification. Finally, the model was applied to the multiclassification of medical colonoscopy images. Experimental results show that the method in this work can significantly improve the recognition rate for polyps while maintaining the classification accuracy of the other categories, so as to assist doctors in diagnosis and decisions about surgical resection.


Subject(s)
Colonoscopy/statistics & numerical data , Colorectal Neoplasms/classification , Colorectal Neoplasms/diagnostic imaging , Deep Learning , Artificial Intelligence , Colonic Polyps/classification , Colonic Polyps/diagnostic imaging , Computational Biology , Humans , Image Interpretation, Computer-Assisted/statistics & numerical data , Neural Networks, Computer
15.
Comput Math Methods Med ; 2021: 9998379, 2021.
Article in English | MEDLINE | ID: covidwho-1314186

ABSTRACT

In recent years, computerized biomedical imaging and analysis have become extremely promising, more interesting, and highly beneficial, providing remarkable information for the diagnosis of skin lesions. There have been developments in modern diagnostic systems that can help detect melanoma in its early stages to save the lives of many people, and significant growth in the design of computer-aided diagnosis (CAD) systems using advanced artificial intelligence. The purpose of the present research is to develop a system to diagnose skin cancer, one that achieves a high detection rate. The proposed system was developed using deep learning and traditional machine learning algorithms. Dermoscopy images were collected from the PH2 and ISIC 2018 datasets to evaluate the diagnostic system. The developed system is divided into a feature-based and a deep learning subsystem. The feature-based subsystem was developed based on feature-extraction methods. To segment the lesion from dermoscopy images, the active contour method was applied. The segmented skin lesions were processed using hybrid feature extraction, namely the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods, to extract texture features. The obtained features were then processed using an artificial neural network (ANN). In the second subsystem, a convolutional neural network (CNN) was applied for the efficient classification of skin diseases; the CNNs were pretrained using large AlexNet and ResNet50 transfer learning models. The experimental results show that the proposed method outperformed state-of-the-art methods on the PH2 and ISIC 2018 datasets. Standard evaluation metrics, including accuracy, specificity, sensitivity, precision, recall, and F-score, were employed to evaluate the results of the two proposed systems. The ANN model achieved the highest accuracy for PH2 (97.50%) and ISIC 2018 (98.35%) compared with the CNN model. An evaluation and comparison of the proposed systems for the classification and detection of melanoma are presented.
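Of the two texture descriptors named above, LBP is the simplest to show. Here is a basic 8-neighbor LBP in pure numpy, using the canonical fixed-radius formulation; the paper's exact LBP parameters are not stated:

```python
import numpy as np

def lbp8(img):
    """8-neighbor local binary pattern for a 2-D grayscale image.
    Each interior pixel gets an 8-bit code: bit i is set when the
    i-th neighbor is >= the center pixel."""
    c = img[1:-1, 1:-1]
    neighbors = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                 img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                 img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbors):
        code |= (n >= c).astype(np.uint8) << bit
    return code

flat_patch = np.full((4, 4), 7)   # uniform region: every neighbor >= center
codes = lbp8(flat_patch)
hist = np.bincount(codes.reshape(-1), minlength=256)  # LBP histogram = texture feature
```

The 256-bin histogram (often reduced to uniform patterns) is what gets concatenated with GLCM statistics and fed to the ANN classifier.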


Subject(s)
Diagnosis, Computer-Assisted/methods , Melanoma/diagnostic imaging , Skin Neoplasms/diagnostic imaging , Algorithms , Artificial Intelligence , Computational Biology , Databases, Factual/statistics & numerical data , Deep Learning , Dermoscopy , Diagnosis, Computer-Assisted/statistics & numerical data , Early Detection of Cancer/methods , Early Detection of Cancer/statistics & numerical data , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/statistics & numerical data , Neural Networks, Computer , Skin Diseases/classification , Skin Diseases/diagnostic imaging
16.
IEEE Trans Ultrason Ferroelectr Freq Control ; 68(7): 2507-2515, 2021 07.
Article in English | MEDLINE | ID: covidwho-1288239

ABSTRACT

Being radiation-free, portable, and suitable for repeated use, ultrasonography is playing an important role in diagnosing and evaluating COVID-19 pneumonia (PN) in this epidemic. By means of lung ultrasound scores (LUSS), lung ultrasound (LUS) has been used to estimate excess lung fluid, an important clinical manifestation of COVID-19 PN, with high sensitivity and specificity. However, as a qualitative method, LUSS suffers from large interobserver variation and a requirement for experienced clinicians. Considering this limitation, we developed a quantitative, automatic lung ultrasound scoring system for evaluating COVID-19 PN. A total of 1527 ultrasound images, prospectively collected from 31 COVID-19 PN patients with different clinical conditions, were evaluated and scored with LUSS by experienced clinicians. All images were processed through a series of computer-aided analysis steps, including curve-to-linear conversion, pleural line detection, region-of-interest (ROI) selection, and feature extraction. A collection of 28 features extracted from the ROI was specifically defined to mimic the LUSS. Multilayer fully connected neural networks, support vector machines, and decision trees were developed for scoring LUS images using fivefold cross-validation. The model with two fully connected layers of 128 and 256 units gave the best accuracy, 87%. We conclude that the proposed method can assess ultrasound images by assigning LUSS automatically with high accuracy, and is potentially applicable in the clinic.
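The classifiers above are compared with fivefold cross-validation. The mechanics of that protocol can be sketched with a deliberately simple nearest-centroid classifier standing in for the networks; all data and parameters below are synthetic stand-ins:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """One centroid per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(X, classes, centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def kfold_accuracy(X, y, k=5, seed=0):
    """Shuffle, split into k folds, train on k-1, test on the held-out fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        classes, cents = nearest_centroid_fit(X[train], y[train])
        pred = nearest_centroid_predict(X[test], classes, cents)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, size=(50, 2)),   # class 0 cluster
               rng.normal(5, 0.5, size=(50, 2))])  # class 1 cluster
y = np.array([0] * 50 + [1] * 50)
acc = kfold_accuracy(X, y, k=5)
```

Averaging over folds is what makes the reported 87% less sensitive to any single train/test split.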


Subject(s)
COVID-19/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Lung/diagnostic imaging , Neural Networks, Computer , Ultrasonography/methods , Adult , Aged , Female , Humans , Male , Middle Aged , SARS-CoV-2
17.
IEEE Trans Ultrason Ferroelectr Freq Control ; 67(11): 2258-2264, 2020 11.
Article in English | MEDLINE | ID: covidwho-1284995

ABSTRACT

Lung ultrasound (LUS) is a practical tool for lung diagnosis when computed tomography (CT) is not available. Recent findings suggest that LUS diagnosis is highly advantageous because of its mobility and its correlation with radiological findings for viral pneumonia. Simple models for both educational and technical evaluation are needed. Therefore, this work investigates the usability of a large animal model for reproducing the LUS features of viral pneumonia using single-lung saline flooding. Six pigs were intubated with a double-lumen tube, and the left lung was instilled with saline. During instillation of up to 12.5 ml/kg, the sonographic features were assessed. All features present during viral pneumonia were found, such as B-lines, white lung syndrome, pleural thickening, and the formation of pleural consolidations. The sonographic findings correlate well with current LUS scores for COVID-19. Scores of 1, 2, and 3 were dominantly present at 1-4, 4-8, and 8-12 ml/kg saline instillation, respectively. The noninfective animal model can be used for further investigation of LUS features and can serve in education by supporting appropriate handling of LUS in clinical practice during the management of viral pneumonia.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Lung , Pneumonia, Viral , Ultrasonography/methods , Animals , COVID-19 , Female , Lung/diagnostic imaging , Lung/pathology , Pneumonia, Viral/diagnostic imaging , Pneumonia, Viral/pathology , Swine
18.
Ann Nucl Med ; 35(10): 1117-1125, 2021 Oct.
Article in English | MEDLINE | ID: covidwho-1281331

ABSTRACT

OBJECTIVE: Pulmonary embolism (PE) is a severe source of mortality and morbidity in patients with severe and critical coronavirus disease 2019 (COVID-19). It is not yet clear whether the tendency to thrombosis is also increased in the mild-to-moderate course of COVID-19. Our research aims to show the clinical benefit of perfusion SPECT/CT (Q-SPECT/CT) in diagnosing perfusion defects (PDs) in outpatients treated for mild-to-moderate COVID-19 and to determine the frequency of PDs in these relatively lower-risk patients. METHODS: All patients who underwent Q-SPECT/CT for suspected embolism were examined retrospectively. Only patients with a mild-to-moderate course of COVID-19 and a low clinical probability of PE were included in the study. Patients with and without perfusion defects were evaluated comparatively and were grouped into laboratory suspicion, clinical suspicion, or combined clinical and laboratory suspicion. RESULTS: In outpatients with mild-to-moderate COVID-19 and low clinical probability of PE, Q-SPECT/CT performed for high D-dimer and/or dyspnea detected PDs without CT abnormality in 36.6% of cases. No patient had a PD more proximal than the segmental level. PDs with no concomitant CT abnormality were observed in 56.5% of patients with both clinical and laboratory suspicion. At a D-dimer cut-off of 0.5 mg/dL, sensitivity was 85%; at a cut-off of 1.5 mg/dL, specificity was 81%. CONCLUSION: A tendency to thrombosis is also present in outpatients with mild-to-moderate COVID-19, and these patients should also be offered anticoagulant prophylaxis during the COVID-19 period.


Subject(s)
COVID-19/diagnostic imaging , Perfusion Imaging/methods , Pulmonary Embolism/diagnostic imaging , SARS-CoV-2/metabolism , Single Photon Emission Computed Tomography Computed Tomography/methods , Adult , Aged , Aged, 80 and over , Dyspnea/metabolism , Female , Fibrin Fibrinogen Degradation Products/metabolism , Humans , Image Interpretation, Computer-Assisted , Lung , Male , Middle Aged , Multimodal Imaging , Probability , Reproducibility of Results , Retrospective Studies , Time Factors
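The cut-off analysis reported in the abstract above (sensitivity at a D-dimer cut-off of 0.5 mg/dL, specificity at 1.5 mg/dL) amounts to a standard confusion-matrix computation; the sketch below uses fabricated toy values, not patient data.

```python
# Toy illustration of evaluating a D-dimer cut-off against the
# perfusion-defect ground truth from Q-SPECT/CT. All values fabricated.

def sens_spec(d_dimer, has_defect, cutoff):
    """Treat d_dimer >= cutoff as test-positive; has_defect is the
    perfusion-defect ground truth."""
    tp = sum(d >= cutoff and y for d, y in zip(d_dimer, has_defect))
    fn = sum(d < cutoff and y for d, y in zip(d_dimer, has_defect))
    tn = sum(d < cutoff and not y for d, y in zip(d_dimer, has_defect))
    fp = sum(d >= cutoff and not y for d, y in zip(d_dimer, has_defect))
    return tp / (tp + fn), tn / (tn + fp)

d_dimer = [0.3, 0.6, 1.0, 2.0, 0.4, 1.6]            # mg/dL, fabricated
has_defect = [False, True, True, True, False, False]

sens_low, _ = sens_spec(d_dimer, has_defect, cutoff=0.5)   # sensitivity at 0.5
_, spec_high = sens_spec(d_dimer, has_defect, cutoff=1.5)  # specificity at 1.5
print(sens_low, spec_high)
```

A lower cut-off catches more true defects (higher sensitivity) at the cost of more false positives, which is why the abstract reports the two operating points separately.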
20.
IEEE Trans Ultrason Ferroelectr Freq Control ; 68(6): 2023-2037, 2021 06.
Article in English | MEDLINE | ID: covidwho-1243581

ABSTRACT

Lung ultrasound (US) imaging has the potential to be an effective point-of-care test for the detection of COVID-19, owing to its ease of operation with minimal personal protective equipment and easy disinfection. The current state-of-the-art deep learning models for COVID-19 detection are heavy models that are not easy to deploy on the mobile platforms commonly used in point-of-care testing. In this work, we develop a lightweight, mobile-friendly, efficient deep learning model for detecting COVID-19 from lung US images across three classes: COVID-19, pneumonia, and healthy. The developed network, named Mini-COVIDNet, was benchmarked against other lightweight neural network models as well as a state-of-the-art heavy model. The proposed network achieved the highest accuracy, 83.2%, and required a training time of only 24 min. Mini-COVIDNet has 4.39 times fewer parameters than the next best performing network and requires only 51.29 MB of memory, making point-of-care detection of COVID-19 from lung US imaging plausible on a mobile platform. Deployment on embedded platforms shows that Mini-COVIDNet is highly versatile, providing optimal accuracy with latency of the same order as other lightweight networks. The developed lightweight models are available at https://github.com/navchetan-awasthi/Mini-COVIDNet.


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Point-of-Care Systems , Ultrasonography/methods , Humans , SARS-CoV-2
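Lightweight mobile networks of the kind benchmarked above typically shrink parameter counts by replacing standard convolutions with depthwise-separable ones. The abstract does not describe Mini-COVIDNet's internals, so the counting below is a generic sketch of that design choice, not the paper's architecture.

```python
# Parameter counting for a standard vs. a depthwise-separable conv layer.
# Depthwise-separable = one k*k depthwise filter per input channel followed
# by a 1x1 pointwise conv (biases ignored for simplicity).

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out

def size_mb(n_params, bytes_per_param=4):  # float32 assumed
    return n_params * bytes_per_param / (1024 ** 2)

k, c_in, c_out = 3, 128, 256
print(standard_conv_params(k, c_in, c_out))        # 294912
print(depthwise_separable_params(k, c_in, c_out))  # 33920
```

For a 3x3 layer the reduction factor is roughly 1/c_out + 1/k², here about 8.7x, which is how such designs reach memory footprints on the order of tens of megabytes.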